Search Results: "bremner"

29 November 2009

David Bremner: Remote invocation of sbuild

So this is in some sense a nadir for shell scripting. 2 lines that do something out of 111. Mostly cargo-culted from cowpoke by ron, but much less fancy. rsbuild foo.dsc should do the trick.
#!/bin/sh
# Start a remote sbuild process via ssh. Based on cowpoke from devscripts.
# Copyright (c) 2007-9 Ron  <ron@debian.org>
# Copyright (c) David Bremner 2009 <david@tethera.net>
#
# Distributed according to Version 2 or later of the GNU GPL.
BUILDD_HOST=sbuild-host
BUILDD_DIR=var/sbuild   #relative to home directory
BUILDD_USER=""
DEBBUILDOPTS="DEB_BUILD_OPTIONS=\"parallel=3\""
BUILDD_ARCH="$(dpkg-architecture -qDEB_BUILD_ARCH 2>/dev/null)"
BUILDD_DIST="default"
usage()
{
    cat 1>&2 <<EOF
rsbuild [options] package.dsc
  Uploads a Debian source package to a remote host and builds it using sbuild.
  The following options are supported:
   --arch="arch"         Specify the Debian architecture(s) to build for.
   --dist="dist"         Specify the Debian distribution(s) to build for.
   --buildd="host"       Specify the remote host to build on.
   --buildd-user="name"  Specify the remote user to build as.
  The current default configuration is:
   BUILDD_HOST = $BUILDD_HOST
   BUILDD_USER = $BUILDD_USER
   BUILDD_ARCH = $BUILDD_ARCH
   BUILDD_DIST = $BUILDD_DIST
  The expected remote paths are:
  BUILDD_DIR  = $BUILDD_DIR
  sbuild must be configured on the build host.  You must have ssh
  access to the build host as BUILDD_USER if that is set, else as the
  user executing rsbuild or a user specified in your ssh config for
  '$BUILDD_HOST'.  That user must be able to execute sbuild.
EOF
    exit $1
}
PROGNAME="$(basename $0)"
version ()
{
    echo \
"This is $PROGNAME, version 0.0.0
This code is copyright 2007-9 by Ron <ron@debian.org>, all rights reserved.
Copyright 2009 by David Bremner <david@tethera.net>, all rights reserved.

This program comes with ABSOLUTELY NO WARRANTY.
You are free to redistribute this code under the terms of the
GNU General Public License, version 2 or later"
    exit 0
}
for arg; do
    case "$arg" in
        --arch=*)
            BUILDD_ARCH="${arg#*=}"
            ;;
        --dist=*)
            BUILDD_DIST="${arg#*=}"
            ;;
        --buildd=*)
            BUILDD_HOST="${arg#*=}"
            ;;
        --buildd-user=*)
            BUILDD_USER="${arg#*=}"
            ;;
        --dpkg-opts=*)
            DEBBUILDOPTS="DEB_BUILD_OPTIONS=\"${arg#*=}\""
            ;;
        *.dsc)
            DSC="$arg"
            ;;
        --help)
            usage 0
            ;;
        --version)
            version
            ;;
        *)
            echo "ERROR: unrecognised option '$arg'"
            usage 1
            ;;
    esac
done
dcmd rsync --verbose --checksum $DSC $BUILDD_USER$BUILDD_HOST:$BUILDD_DIR
ssh -t  $BUILDD_HOST "cd $BUILDD_DIR && $DEBBUILDOPTS sbuild --arch=$BUILDD_ARCH --dist=$BUILDD_DIST $DSC"
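For example (hypothetical host and package names), an invocation overriding the defaults would look something like:
rsbuild --buildd=sbuild-host.example.org --dist=unstable --arch=amd64 foo_1.0-1.dsc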

18 October 2009

David Bremner: Counting symbols from a debian symbols file

I am currently making a shared library out of some existing C code, for eventual inclusion in Debian. Because the author wasn't thinking about things like ABIs and APIs, the code is not too careful about what symbols it exports, and I decided to clean up some of the more obviously private symbols it exported. I wrote the following simple script because I got tired of running grep by hand. If you run it with
 grep-symbols symbolfile  *.c
it will print the symbols, sorted by how many times they occur in the other arguments.
#!/usr/bin/perl
use strict;
use File::Slurp;
my $symfile=shift(@ARGV);
open SYMBOLS, "<$symfile" or die "$!";
# "parse" the symbols file
my %count=();
# skip first line;
$_=<SYMBOLS>;
while(<SYMBOLS>) {
  chomp();
  s/^\s*([^\@]+)\@.*$/$1/;
  $count{$_}=0;
}
# check the rest of the command line arguments for matches against symbols. Omega(n^2), sigh.
foreach my $file (@ARGV) {
  my $string=read_file($file);
  foreach my $sym (keys %count) {
    if ($string =~ m/\b$sym\b/) {
      $count{$sym}++;
    }
  }
}
print "Symbol\t Count\n";
foreach my $sym (sort {$count{$a} <=> $count{$b}} (keys %count)) {
  print "$sym\t$count{$sym}\n";
}

3 February 2009

David Bremner: source-highlight and oz

In order to have pretty highlighted oz code in HTML and TeX, I defined a simple language definition "oz.lang"
keyword = "andthen at attr case catch choice class cond",
          "declare define dis div do else elsecase ",
          "elseif elseof end fail false feat finally for",
          "from fun functor if import in local lock meth",
          "mod not of or orelse prepare proc prop raise",
          "require self skip then thread true try unit"
meta delim "<" ">"
cbracket = "{|}"
comment start "%"
symbol = "~","*","(",")","-","+","=","[","]","#",":",
       ",",".","/","?","&","<",">","\ "
atom delim "'" "'"  escape "\\"
atom = '[a-z][[:alpha:][:digit:]]*'
variable delim "`" "`"  escape "\\"
variable = '[A-Z][[:alpha:][:digit:]]*'
string delim "\"" "\"" escape "\\"
The meta tags are so I can intersperse EBNF notation in with oz code. Unfortunately source-highlight seems a little braindead about e.g. environment variables, so I had to wrap the invocation in a script
#!/bin/sh
HLDIR=$HOME/config/source-highlight
source-highlight --style-file=$HLDIR/default.style --lang-map=$HLDIR/lang.map $*
The final pieces of the puzzle are a customized lang.map file that tells source-highlight to use "oz.lang" for "foo.oz", and a default.style file that defines highlighting for "meta" text.
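For reference, a lang.map entry is just a one-line mapping from the language name to its definition file; assuming oz.lang sits in the same directory, the line would look something like:
oz = oz.lang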

26 December 2008

David Bremner: Prolegomenon to any future tg-buildpackage

So I have been getting used to madduck's workflow for topgit and debian packaging, and one thing that bugged me a bit was all the steps required to build. I tend to build quite a lot when debugging, so I wrote up a quick and dirty script to automate it. I don't claim this is anywhere near production quality, but maybe it helps someone. Assumptions (that I remember): the package uses pristine-tar for the orig tarball, and build products go into a build-area directory next to the git working tree. Here is the actual script:
#!/bin/sh
set -x
if [ x$1 = x-k ]; then
    keep=1
else
    keep=0
fi
WORKROOT=/tmp
WORKDIR=$(mktemp -d $WORKROOT/tg-debuild-XXXX)
# yes, this could be nicer
SOURCEPKG=$(dpkg-parsechangelog | grep ^Source: | sed 's/^Source:\s*//')
UPSTREAM=$(dpkg-parsechangelog | grep ^Version: | sed -e 's/^Version:\s*//' -e s/-[^-]*//)
ORIG=$WORKDIR/${SOURCEPKG}_${UPSTREAM}.orig.tar.gz
pristine-tar checkout $ORIG
WORKTREE=$WORKDIR/$SOURCEPKG-$UPSTREAM
CDUP=$(git rev-parse --show-cdup)
GDPATH=$PWD/$CDUP/.git
DEST=$PWD/$CDUP/../build-area
git archive --prefix=$WORKTREE/ --format=tar master | tar xfP -
GIT_DIR=$GDPATH make -C $WORKTREE -f debian/rules tg-export
cd $WORKTREE && GIT_DIR=$GDPATH debuild 
if [ $? -eq 0 -a -d $DEST ]; then
    cp $WORKDIR/*.deb $WORKDIR/*.dsc $WORKDIR/*.diff.gz $WORKDIR/*.changes $DEST
fi
if [ $keep = 0 ]; then
    rm -fr $WORKDIR
fi

24 December 2008

David Bremner: So your topgit patch was merged upstream

Scenario: You are maintaining a debian package with topgit. You have a topgit patch against version k and it has been merged into upstream version m. You want to "disable" the topgit branch, so that patches are not auto-generated, but you are not brave enough to just
   tg delete feature/foo
You are brave enough to follow the instructions of a random blog post. Checking your patch has really been merged upstream: this assumes that you have tags upstream/j for version j.
git checkout feature/foo
git diff upstream/k
For each file foo.c modified in the output above, have a look at
git diff upstream/m foo.c
This kind of has to be a manual process, because upstream could easily have modified your patch (e.g. formatting). The semi-destructive way: suppose you really never want to see that topgit branch again.
git update-ref -d refs/topbases/feature/foo
git checkout master
git branch -M feature/foo merged/foo
The non-destructive way: after I worked out the above, I realized that all I had to do was make an explicit list of topgit branches that I wanted exported. One minor trick is that the setting seems to have to go before the include, like this:
TG_BRANCHES=debian/bin-makefile debian/libtoolize-lib debian/test-makefile
-include /usr/share/topgit/tg2quilt.mk
Conclusions: I'm not really sure which approach is best yet. I'm going to start with the non-destructive one and see how that goes. Updated: Madduck points to a third, more sophisticated approach in the Debian BTS.

22 December 2008

David Bremner: A topgit testimonial

I wanted to report a success story with topgit, which is a rather new patch queue management extension for git. If that sounds like gibberish to you, this is probably not the blog entry you are looking for. Some time ago I decided to migrate the debian packaging of bibutils to topgit. This is not a very complicated package, with 7 quilt patches applied to upstream source. Since I don't have any experience to go on, I decided to follow Martin 'madduck' Krafft's suggestion for workflow. It all looks a bit complicated (madduck will be the first to agree), but it forced me to think about which patches were intended to go upstream and which were not. At the end of the conversion I had 4 patches that were cleanly based on upstream, and (perhaps most importantly for lazy people like me), I could send them upstream with tg mail. I did that, and a few days later, Chris Putnam sent me a new upstream release incorporating all of those patches. Of course, now I have to package this new upstream release :-). The astute reader might complain that this is more about me developing half-decent workflow, and Chris being a great guy, than about any specific tool. That may be true, but one thing I have discovered since I started using git is that tools that encourage good workflow are very nice. Actually, before I started using git, I didn't even use the word workflow. So I just wanted to give a public thank you to pasky for writing topgit and to madduck for pushing it into debian, and thinking about debian packaging with topgit.

3 December 2008

David Bremner: Using GLPK from C++

Recently I suggested to some students that they could use the GNU Linear Programming Kit (GLPK) from C++. Shortly afterwards I thought I had better verify that I had not just sent people on a hopeless mission. To test things out, I decided to try using GLPK as part of an ongoing project with Lars Schewe. The basic idea of this example is to use glpk to solve an integer program with row generation. The main hurdle (assuming you want to actually write object oriented C++) is how to make the glpk callback work in an object oriented way. Luckily glpk provides a pointer "info" that can be passed to the solver, and which is passed back to the callback routine. This can be used to keep track of what object is involved. Here is the class header:
#ifndef GLPSOL_HH
#define GLPSOL_HH
#include "LP.hh"
#include "Vektor.hh"
#include "glpk.h"
#include "combinat.hh"
namespace mpc {
  class GLPSol : public LP {
  private:
    glp_iocp parm;
    static Vektor<double> get_primal_sol(glp_prob *prob);
    static void callback(glp_tree *tree, void *info);
    static int output_handler(void *info, const char *s);
  protected:
    glp_prob *root;
  public:
    GLPSol(int columns);
    ~GLPSol() {};
    virtual void rowgen(const Vektor<double> &candidate) {};
    bool solve();
    bool add(const LinearConstraint &cnst);
  };
}
#endif
The class LP is just an abstract base class (like an interface for java-heads) defining the add method. The method rowgen is virtual because it is intended to be overridden by a subclass if row generation is actually required. By default it does nothing. Notice that the callback method here is static; that means it is essentially a C function with a funny name. This will be the function that glpk calls when it wants some help.
#include <assert.h>
#include "GLPSol.hh"
#include "debug.hh"
namespace mpc {
  GLPSol::GLPSol(int columns) {
    // redirect logging to my handler
    glp_term_hook(output_handler,NULL);
    // make an LP problem
    root=glp_create_prob();
    glp_add_cols(root,columns);
    // all of my variables are binary, my objective function is always the same
    //  your mileage may vary
    for (int j=1; j<=columns; j++) {
      glp_set_obj_coef(root,j,1.0);
      glp_set_col_kind(root,j,GLP_BV);
    }
    glp_init_iocp(&parm);
    // here is the interesting bit; we pass the address of the current object
    // into glpk along with the callback function
    parm.cb_func=GLPSol::callback;
    parm.cb_info=this;
  }

  int GLPSol::output_handler(void *info, const char *s) {
    DEBUG(1) << s;
    return 1;
  }

  Vektor<double> GLPSol::get_primal_sol(glp_prob *prob) {
    Vektor<double> sol;
    assert(prob);
    for (int i=1; i<=glp_get_num_cols(prob); i++) {
      sol[i]=glp_get_col_prim(prob,i);
    }
    return sol;
  }
  // the callback function just figures out what object called glpk and forwards
  // the call. I happen to decode the solution into a more convenient form, but 
  // you can do what you like
  void GLPSol::callback(glp_tree *tree, void *info) {
    GLPSol *obj=(GLPSol *)info;
    assert(obj);
    switch(glp_ios_reason(tree)) {
    case GLP_IROWGEN:
      obj->rowgen(get_primal_sol(glp_ios_get_prob(tree)));
      break;
    default:
      break;
    }
  }

  bool GLPSol::solve(void) {
    int ret=glp_simplex(root,NULL);
    if (ret==0) 
      ret=glp_intopt(root,&parm);
    if (ret==0)
      return (glp_mip_status(root)==GLP_OPT);
    else
      return false;
  }

  bool GLPSol::add(const LinearConstraint &cnst) {
    int next_row=glp_add_rows(root,1);
    // for mysterious reasons, glpk wants to index from 1
    int indices[cnst.size()+1];
    double coeff[cnst.size()+1];
    DEBUG(3) << "adding " << cnst << std::endl;
    int j=1;
    for (LinearConstraint::const_iterator p=cnst.begin();
         p!=cnst.end(); p++) {
      indices[j]=p->first;
      coeff[j]=(double)p->second;
      j++;
    }
    int gtype=0;
    switch(cnst.type()) {
    case LIN_LEQ:
      gtype=GLP_UP;
      break;
    case LIN_GEQ:
      gtype=GLP_LO;
      break;
    default:
      gtype=GLP_FX;
    }
    glp_set_row_bnds(root,next_row,gtype,       
                       (double)cnst.rhs(),(double)cnst.rhs());
    glp_set_mat_row(root,
                    next_row,
                    cnst.size(),
                    indices,
                    coeff);
    return true;
  }
}
All this is a big waste of effort unless we actually do some row generation. I'm not especially proud of the crude rounding I do here, but it shows how to do it, and it does, eventually, solve problems.
#include "OMGLPSol.hh"
#include "DualGraph.hh"
#include "CutIterator.hh"
#include "IntSet.hh"
namespace mpc {
  void OMGLPSol::rowgen(const Vektor<double> &candidate) {
    if (diameter<=0) {
      DEBUG(1) << "no path constraints to generate" << std::endl;
      return;
    }
    DEBUG(3) << "Generating paths for " << candidate << std::endl;
  // this looks like a crude hack, which it is, but motivated by the
  // following: the boundary complex is determined only by the signs
  // of the bases, which we here represent as 0 for - and 1 for +
    Chirotope chi(*this);
    for (Vektor<double>::const_iterator p=candidate.begin();
         p!=candidate.end(); p++) {
      if (p->second > 0.5) {
        chi[p->first]=SIGN_POS;
      } else {
        chi[p->first]=SIGN_NEG;
      }
    }
    BoundaryComplex bc(chi);
    DEBUG(3) << chi;
    DualGraph dg(bc);
    CutIterator pathins(*this,candidate);
    int paths_found=
      dg.all_paths(pathins,
                   IntSet::lex_set(elements(),rank()-1,source_facet),
                   IntSet::lex_set(elements(),rank()-1,sink_facet),
                   diameter-1);
    DEBUG(1) << "row generation found " << paths_found << " realized paths\n";
    DEBUG(1) << "effective cuts: " << pathins.effective() << std::endl;
   
  void OMGLPSol::get_solution(Chirotope &chi)  
    int nv=glp_get_num_cols(root);
    for(int i=1;i<=nv;++i) {
      int val=glp_mip_col_val(root,i);
      chi[i]=(val==0 ? SIGN_NEG : SIGN_POS);
    }
  }
}
So, ignoring the problem-specific way I generate constraints, the key remaining piece of code is CutIterator, which filters the generated constraints to make sure they actually cut off the candidate solution. This is crucial, because row generation must not add constraints in the case that it cannot improve the solution, because glpk assumes that if the user is generating cuts, the solver doesn't have to.
#ifndef PATH_CONSTRAINT_ITERATOR_HH
#define PATH_CONSTRAINT_ITERATOR_HH
#include "PathConstraint.hh"
#include "CNF.hh"
namespace mpc {
  class CutIterator : public std::iterator<std::output_iterator_tag,
                                                      void,
                                                      void,
                                                      void,
                                                      void> {
  private:
    LP& _list;
    Vektor<double> _sol;
    std::size_t _pcount;
    std::size_t _ccount;
  public:
    CutIterator (LP& list, const Vektor<double>& sol) : _list(list),_sol(sol), _pcount(0), _ccount(0) {}
    CutIterator& operator=(const Path& p) {
      PathConstraint pc(p);
      _ccount+=pc.appendTo(_list,&_sol);
      _pcount++;
      if (_pcount %10000==0) {
        DEBUG(1) << _pcount << " paths generated" << std::endl;
      }
      return *this;
    }
    CutIterator& operator*() { return *this; }
    CutIterator& operator++() { return *this; }
    CutIterator& operator++(int) { return *this; }
    int effective() { return _ccount; };
  };
}
#endif
Oh heck, another level of detail; the actual filtering happens in the appendTo method of the PathConstraint class. This is just computing the dot product of two vectors. I would leave it as an exercise for the reader, but remember that some fuzz is necessary to do these kinds of comparisons with floating point numbers. Eventually, the decision is made by the following feasible method of the LinearConstraint class.
bool feasible(const Vektor<double> &x) {
  double sum=0;
  for (const_iterator p=begin(); p!=end(); p++) {
    sum+= p->second*x.at(p->first);
  }
  switch (type()) {
  case LIN_LEQ:
    return (sum <= _rhs+epsilon);
  case LIN_GEQ:
    return (sum >= _rhs-epsilon);
  default:
    return (sum <= _rhs+epsilon) &&
           (sum >= _rhs-epsilon);
  }
}

10 November 2008

David Bremner: Using Org Mode as a time tracker

I have been meaning to fix this up for a long time, but so far real work keeps getting in the way. The idea is that C-c t brings you to this week's time tracker buffer, and then you use C-c C-x C-i / C-c C-x C-o to start and stop timers. The only even slightly clever part is stopping the timer and saving on quitting emacs, which I borrowed from someone on the net. Oh, and one of the things I meant to fix was the dependence on mhc. Sorry. Here is the snippet from my .emacs:
(require 'org-timetracker)
(setq   ott-file
  (expand-file-name 
    (let ((now (mhc-date-now)))
      (format "~/.org/%04d/%02d.org" 
         (mhc-date-yy now) (mhc-date-cw now)))))
It needs emacs restarted once a week, in order to pick up the new file name. The main guts of the hack are here. The result might look like this (it works better in emacs org-mode; C-c C-x C-d gives a summary).
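The stop-the-clock-and-save-on-exit part is only linked above, not shown; a minimal sketch of the idea, using standard org-clock functions (the function name my-ott-save-on-exit is made up), looks like this:
(defun my-ott-save-on-exit ()
  ;; clock out if a timer is running, then save modified buffers
  (when (org-clock-is-active)
    (org-clock-out))
  (save-some-buffers t))

(add-hook 'kill-emacs-hook 'my-ott-save-on-exit)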

25 October 2008

David Bremner: Tunnel versus MUX: a race to the git

Third in a series (git-sync-experiments, git-sync-experiments2) of completely unscientific experiments to try and figure out the best way to sync many git repos. If you want to make many ssh connections to a given host, then the first thing you need to do is turn on multiplexing; see the ControlPath and ControlMaster options in ssh_config (there is a sketch at the end of this post). Presuming that is not fast enough, one option is to make many parallel connections (see e.g. git-sync-experiments2), but this won't scale very far. This week I consider the possibility of running a tunneled socket to a remote git-daemon:
ssh  -L 9418:localhost:9418 git-host.domain.tld git-daemon --export-all
Of course from a security point of view this is awful, but I did it anyway, at least temporarily. Running my "usual" test of git pull in 15 up-to-date repos, I get 3.7s versus about 5s with the multiplexing. So, roughly a 25% improvement; probably not worth the trouble. In both cases I just run a shell script like:
  cd repo1 && git pull && cd ..
  cd repo2 && git pull && cd ..
  cd repo3 && git pull && cd ..
  cd repo4 && git pull && cd ..
  cd repo5 && git pull && cd ..
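For reference, the ssh multiplexing mentioned above is just a couple of lines in ~/.ssh/config; a sketch, using the same example host as above, might be:
Host git-host.domain.tld
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p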

8 October 2008

David Bremner: The mailbox plugin for ikiwiki

In a recent blog post, Kai complained about various existing tools for marking up email in HTML. He also asked for pointers to other tools. Since he didn't specify good tools :-), I took the opportunity to promote my work-in-progress plugin for ikiwiki to do that very thing. When asked about demo sites, I realized that my blog doesn't actually use threaded comments yet, so it made a poor demo. Follow the link and you will find one of the mailboxes from the distribution, mostly a few posts from one of the debian lists. The basic idea is to use the Email::Thread perl module to get a forest of thread trees, and then walk those generating output. I think it would be fairly easy to make some kind of mutt-like index using essentially the same tree walking code. Not that I'm volunteering immediately mind you, I have to get replies to comments on my blog working (which is the main place I use this plugin right now).
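As a rough illustration of the idea (this is a sketch against the Email::Thread API, not the plugin code; the walk function is made up), building the forest and printing an indented, mutt-like index of subjects might look like:
#!/usr/bin/perl
# sketch: build a thread forest with Email::Thread, then recurse over
# the containers printing subjects indented by depth
use strict;
use Email::Thread;
use Email::Simple;
use File::Slurp;

my @messages = map { Email::Simple->new(scalar read_file($_)) } @ARGV;
my $threader = Email::Thread->new(@messages);
$threader->thread;

sub walk {
    my ($container, $depth) = @_;
    return unless $container;
    if (my $msg = $container->message) {
        print '  ' x $depth, $msg->header('Subject'), "\n";
    }
    walk($container->child, $depth + 1);
    walk($container->next,  $depth);
}

walk($_, 0) for $threader->rootset;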

26 September 2008

David Bremner: Can I haz a distributed news reader?

RSS readers are better than obsessively checking 18 or so web sites myself, but they seem to share one very annoying feature. They assume I read news/rss on only one machine, and I have to manually mark off which articles I have already read on a different machine. Similarly, nntp readers (that I know about) have only a local idea of what is read and not read. For me, this makes reading high volume lists via gmane almost unbearable. Am I the only one that wastes time on more than one computer?

20 September 2008

David Bremner: managing many git repos

I have been thinking about ways to speed up updating multiple remote git repos on the same host. My starting point is mr, which does the job, but is a bit slow. I am thinking about giving up some generality for some speed. In particular it seems like it ought to be possible to optimize for a couple of common use cases. For my needs, mr is almost fast enough, but I can see it getting annoying as I add repos (I currently have 11, and mr update takes about 5 seconds; I am already running ssh multiplexing). I am also thinking about the needs of the Debian Perl Modules Team, which would have over 900 git repos if the current setup was converted to one git repo per module. My first attempt, using the perl module Net::SSH::Expect to keep an ssh channel open, can be scientifically classified as "utter fail", since Net::SSH::Expect takes about 1 second to round trip "/bin/true". Initial experiments using IPC::PerlSSH are more promising. The following script grabs the head commit in 11 repos in about 0.5 seconds. Of course, it still doesn't do anything useful, but I thought I would toss this out there in case there already exists a solution to this problem I don't know about.
 
#!/usr/bin/perl
use IPC::PerlSSH;
use Getopt::Std;
use File::Slurp;
my %config; 
eval( "\%config=(".read_file(shift(@ARGV)).")");
die "reading configuration failed: $@" if $@;
my $ips= IPC::PerlSSH->new(Host=>$config{host});
$ips->eval("use Git");
$ips->store( "ls_remote", q{ my $repo=shift;
                       return Git::command_oneline('ls-remote',$repo,'HEAD');
                            } );
foreach $repo (@{$config{repos}}) {
    print $ips->call("ls_remote",$repo);
}
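The configuration file passed to the script is just the body of a Perl hash initializer; a hypothetical example (host and repo names made up) would be:
host  => 'git.example.com',
repos => [ 'projects/foo.git', 'projects/bar.git' ],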
P.S. If you google for "mr joey hess", you will find a Kiss tribute band called Mr. Speed, started by Joe Hess. P.P.S. Hello planet debian!

David Bremner: managing many git repos II

In a previous post I complained that mr was too slow. madduck pointed me to the "-j" flag, which runs updates in parallel. With -j 5, my 11 repos update in 1.2s, so this is probably good enough to put this project on the back burner until I get annoyed again. I have the feeling that the "right solution" (TM) involves running either git-daemon or something like it on the remote host. The concept would be to set up a pair of file descriptors connected via ssh to the remote git-daemon, and have your local git commands talk to that pair of file descriptors instead of a socket. Alas, that looks like a bit of work to do, if it is even possible.

11 April 2007

MJ Ray: Satellite TV and Media Links for 2007-04-11

Ofcom publishes research on media literacy
Analysis of a research report: "Certainly a useful document to have on hand, if only to deconstruct."
Karen Bremner - The Apprentice Series 2
A new art form: go on a reality TV show, start blogging to raise profile, get hired by Old Media, then leave the site to bit-rot. How many of these "reality TV star headstones" sites are there now?
predicts problems and confusion for consumers
If anyone doubts how much big media gets bad PR from trying to use their big influence to direct the market (such as making their films only available in their format), check the first comment: "I'm waiting for the corporate idiots to decide"
xmltv, was: [Debian-uk] Getting wired in Manchester
Another thing on my "must try that when I get my DVR built" list.
theLargePrint.com The New Media Landscape and the BBC's Lost Horizon
Nice criticism of BBC's link-up with anti-competitors. I recently wrote about the BBC-Google link, which raises some similar problems.
BBC NEWS | Business | Fewer charges for website content
Figures from 2006 (I'll do media linkposts more often in future).
Time to read the papers?
I only get time to read one paper per week, unless I'm travelling.
Number 10: Tony Blair Interview with Stephen Fry
Fantasy TV shows number 10: A bit of Fry and Blairie.
BBC NEWS | Technology | Broadband switching set to ease
They say new rules came in on 14 February, but sites like moneySavingExpert.com are still full of tales of woe.
We the undersigned petition the Prime Minister to ban tv competitions/quizzes which charge extortionate caller rates (eg The Mint).
Sign this to clear UK digital TV of all these scam quizzes. I see that after just a taste of humble pie, itv play is back on air overnight on itv1.
